Deep Multi-Modal Generic Representation Auxiliary Learning Networks for End-to-End Radar Emitter Classification

Authors

Abstract

Radar data mining is a key module of signal analysis: patterns hidden inside signals gradually become available during the learning process, and exploiting them significantly enhances the security of radar emitter classification (REC) systems. Because the radio frequency fingerprint (RFF) arises from imperfections in an emitter's hardware and is therefore difficult to forge, current deep-learning REC methods based on techniques such as convolutional neural networks (CNNs) and long short-term memory (LSTM) networks aim to capture stable RFF features. In this paper, an online, non-cooperative, multi-modal generic representation auxiliary model, namely the multi-modal generic representation auxiliary learning network (MGRALN), is put forward. Multi-modal means that multi-domain transformations of the signal are unified into a single representation. This representation is then employed to exploit implicit information and improve model robustness, which is achieved by using generation to guide the training and learning. Online means the model is trained only once, end-to-end. Non-cooperative denotes that no demodulation techniques are used before the classification task. Experimental results on measured civil aviation data demonstrate that the proposed method achieves superior performance.
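The abstract's notion of "multi-domain transformations unified into a representation" can be illustrated with a minimal sketch. The paper's actual transforms and fusion network are not specified here, so the function below (a hypothetical helper, not the authors' method) simply concatenates a normalized time-domain view of a pulse with its frequency-domain (FFT magnitude) view to form one fused feature vector:

```python
import numpy as np

def multimodal_representation(signal: np.ndarray) -> np.ndarray:
    """Fuse two domain views of a radar pulse into one feature vector.

    Illustrative sketch only: a normalized time-domain view is
    concatenated with a normalized FFT-magnitude view. A learned
    network (as in MGRALN) would replace this fixed fusion.
    """
    time_view = signal / (np.max(np.abs(signal)) + 1e-12)
    freq_view = np.abs(np.fft.rfft(signal))
    freq_view = freq_view / (np.max(freq_view) + 1e-12)
    return np.concatenate([time_view, freq_view])

# A 128-sample pulse yields 128 time-domain + 65 spectral features.
pulse = np.sin(2 * np.pi * 0.05 * np.arange(128))
rep = multimodal_representation(pulse)
print(rep.shape)  # (193,)
```

A downstream classifier would consume such a unified representation instead of any single domain alone.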


Similar Articles

Deep Learning for End-to-End Automatic Target Recognition from Synthetic Aperture Radar Imagery

The standard architecture of synthetic aperture radar (SAR) automatic target recognition (ATR) consists of three stages: detection, discrimination, and classification. In recent years, convolutional neural networks (CNNs) for SAR ATR have been proposed, but most of them classify target classes from a target chip extracted from SAR imagery, i.e., they perform only the third stage of SAR ATR. In ...


End-to-end Video-level Representation Learning for Action Recognition

From frame/clip-level feature learning to video-level representation building, deep learning methods for action recognition have developed rapidly in recent years. However, current methods suffer from confusion caused by partial-observation training, lack end-to-end learning, or are restricted to single-temporal-scale modeling, and so on. In this paper, we build upon two-stream ConvN...


TVM: End-to-End Optimization Stack for Deep Learning

Scalable frameworks, such as TensorFlow, MXNet, Caffe, and PyTorch drive the current popularity and utility of deep learning. However, these frameworks are optimized for a narrow range of server-class GPUs and deploying workloads to other platforms such as mobile phones, embedded devices, and specialized accelerators (e.g., FPGAs, ASICs) requires laborious manual effort. We propose TVM, an end-...


End-to-End Learning of Motion Representation for Video Understanding

Despite the recent success of end-to-end learned representations, hand-crafted optical flow features are still widely used in video analysis tasks. To fill this gap, we propose TVNet, a novel end-to-end trainable neural network, to learn optical-flow-like features from data. TVNet subsumes a specific optical flow solver, the TV-L1 method, and is initialized by unfolding its optimization iterati...


End-to-End Deep Learning for Driver Distraction Recognition

In this paper, an end-to-end deep learning solution for driver distraction recognition is presented. In the proposed framework, features are extracted from a pre-trained VGG-19 convolutional neural network. Despite the variation in illumination conditions, camera position, and drivers' ethnicities and genders in our dataset, our best fine-tuned model, VGG-19, has achieved the highest test accuracy...



Journal

Journal title: Aerospace

Year: 2022

ISSN: 2226-4310

DOI: https://doi.org/10.3390/aerospace9110732